
    Ranking of library and information science researchers: Comparison of data sources for correlating citation data, and expert judgments

    This paper studies the correlations between peer review and citation indicators when evaluating research quality in library and information science (LIS). Forty-two LIS experts rated the quality of research published by 101 scholars on a 5-point scale; the median rankings derived from these judgments were then correlated with h-, g- and H-index values computed from three sources of citation data: Web of Science (WoS), Scopus and Google Scholar (GS). The two variants of the basic h-index correlated more strongly with peer judgment than did the h-index itself; citation data from Scopus correlated more strongly with the expert judgments than data from GS, which in turn correlated more strongly than data from WoS; correlations from a carefully cleaned version of the GS data differed little from those obtained using swiftly gathered GS data; the indices from the three citation databases produced broadly similar rankings of the LIS academics; GS disadvantaged researchers in bibliometrics compared to the other two citation databases, while WoS disadvantaged researchers in the more technical aspects of information retrieval; and experts from the UK and other European countries rated UK academics more highly than did experts from the USA.
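    The indicators compared above can be sketched in a few lines. The following is an illustrative pure-Python sketch (the citation counts and expert scores are hypothetical, not the paper's data): the h-index is the largest h such that h papers each have at least h citations, the g-index is the largest g such that the top g papers together have at least g-squared citations, and Spearman's rho is the Pearson correlation of the ranks.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def spearman(xs, ys):
    """Spearman's rho as Pearson correlation of ranks (no tie correction)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

citations = [25, 8, 5, 3, 3, 1]   # hypothetical per-paper citation counts
print(h_index(citations))          # 3: three papers have >= 3 citations
print(g_index(citations))          # 6: 45 total citations >= 6^2 = 36
```

    In a study like the one above, `spearman` would be applied to scholars' index values against their median expert rankings; with ties present, a tie-corrected rank assignment would be needed.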

    User participation in an academic social networking service: A survey of open group users on Mendeley

    Although there are a number of social networking services that specifically target scholars, little has been published about the actual practices and usage of these so-called academic social networking services (ASNSs). To fill this gap, we explore the populations of academics who engage in social activities using an ASNS; as an indicator of further engagement, we also determine their various motivations for joining a group in ASNSs. Using groups and their members in Mendeley as the platform for our case study, we obtained 146 responses to our online survey about users' common activities, usage habits, and motivations for joining groups. Our results show that (a) participants did not engage with social-based features as frequently and actively as they engaged with research-based features, and (b) users who joined more groups seemed to have a stronger motivation to increase their professional visibility and to contribute the research articles that they had read to the group reading list. Our results generate interesting insights into Mendeley's user populations, their activities, and their motivations relative to the social features of Mendeley. We also argue that the future design of ASNSs should take greater account of disciplinary differences in scholarly communication and establish incentive mechanisms for encouraging user participation.

    Metrics to evaluate research performance in academic institutions: A critique of ERA 2010 as applied in forestry and the indirect H2 index as a possible alternative

    Excellence in Research for Australia (ERA) is an attempt by the Australian Research Council to rate Australian universities on a 5-point scale within 180 Fields of Research, using metrics and peer evaluation by an evaluation committee. Some of the bibliometric data contributing to this rating suffer from statistical issues associated with skewed distributions. Other data are standardised year by year, placing undue emphasis on the most recent publications, which may not yet have reliable citation patterns. The bibliometric data offered to the evaluation committees are extensive but lack effective syntheses such as the h-index and its variants. The indirect H2 index is objective, can be computed automatically and efficiently, is resistant to manipulation, and is a good indicator of impact that could assist the ERA evaluation committees and similar evaluations internationally.
    Comment: 19 pages, 6 figures, 7 tables, appendices
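    A second-order index of this kind can be sketched as follows, assuming it is the successive h-index: the h-index computed over the individual h-indices of an institution's researchers. This is an illustrative definition with hypothetical data; the paper's exact "indirect H2 index" may differ in detail.

```python
def h_index(values):
    """Largest h such that h of the values are >= h."""
    vs = sorted(values, reverse=True)
    return sum(1 for rank, v in enumerate(vs, start=1) if v >= rank)

def h2_index(per_researcher_citations):
    """h-index computed over each researcher's own h-index."""
    return h_index([h_index(c) for c in per_researcher_citations])

# Hypothetical institution: each inner list is one researcher's
# per-paper citation counts.
inst = [[10, 9, 5, 4], [6, 3, 2], [1, 1], [12, 7, 7, 2]]
print([h_index(c) for c in inst])  # [4, 2, 1, 3]
print(h2_index(inst))              # 2: two researchers have h-index >= 2
```

    Because the second-order index depends only on researchers' whole-career h-indices, inflating any single paper's citation count changes it little, which is one sense in which such indices resist manipulation.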

    Not all international collaboration is beneficial: The Mendeley readership and citation impact of biochemical research collaboration

    This is an accepted manuscript of an article published by Wiley Blackwell in the Journal of the Association for Information Science and Technology on 13/05/2015, available online: https://doi.org/10.1002/asi.23515. The accepted version of the publication may differ from the final published version.
    Biochemistry is a highly funded research area that is typified by large research teams and is important for many areas of the life sciences. This article investigates the citation impact and Mendeley readership impact of biochemistry research from 2011 in the Web of Science according to the type of collaboration involved. Negative binomial regression models are used that incorporate, for the first time, the inclusion of specific countries within a team. The results show that, holding other factors constant, larger teams robustly associate with higher-impact research, but including additional departments has no effect, and adding extra institutions tends to reduce the impact of research. Although international collaboration is apparently not advantageous in general, collaboration with the USA, and perhaps also with some other countries, seems to increase impact. In contrast, collaborations with some other nations associate with lower impact, although both findings could be due to factors such as differing national proportions of excellent researchers. As a methodological implication, simpler statistical models would have found international collaboration to be generally beneficial, so it is important to take specific countries into account when examining collaboration.

    The impact of Cochrane Systematic Reviews : a mixed method evaluation of outputs from Cochrane Review Groups supported by the UK National Institute for Health Research

    © 2014 Bunn et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
    Background: There has been a growing emphasis on evidence-informed decision making in health care. Systematic reviews, such as those produced by the Cochrane Collaboration, have been a key component of this movement. The UK National Institute for Health Research (NIHR) Systematic Review Programme currently supports 20 Cochrane Review Groups (CRGs). The aim of this study was to identify the impacts of Cochrane reviews published by NIHR-funded CRGs during the years 2007-11. Methods: We sent questionnaires to CRGs and review authors, interviewed guideline developers, and used bibliometrics and documentary review to get an overview of CRG impact and to evaluate the impact of a sample of 60 Cochrane reviews. We used a framework with four categories (knowledge production, research targeting, informing policy development, and impact on practice/services). Results: A total of 1502 new and updated reviews were produced by the 20 NIHR-funded CRGs between 2007 and 2011. The clearest impacts were on policy, with a total of 483 systematic reviews cited in 247 sets of guidance; 62 were international, 175 national (87 from the UK) and 10 local. Review authors and CRGs provided some examples of impact on practice or services, for example safer use of medication, the identification of new effective drugs or treatments, and potential economic benefits through the reduction in the use of unproven or unnecessary procedures.
    However, such impacts are difficult to document objectively, and the majority of reviewers were unsure whether their review had produced specific impacts. Qualitative data suggested that Cochrane reviews often play an instrumental role in informing guidance, although a poor fit with guideline scope or methods, reviews being out of date, and a lack of communication between CRGs and guideline developers were barriers to their use. Conclusions: Health and economic impacts of research are generally difficult to measure, and we found that to be the case with this evaluation. Impacts on knowledge production and clinical guidance were easier to identify and substantiate than those on clinical practice. Questions remain about how we define and measure impact, and more work is needed to develop suitable methods for impact analysis.

    ResearchGate versus Google Scholar: Which finds more early citations?

    ResearchGate has launched its own citation index by extracting citations from documents uploaded to the site and reporting citation counts on article profile pages. Since authors may upload preprints to ResearchGate, it may use these to provide early impact evidence for new papers. This article assesses whether the number of citations found for recent articles is comparable to that in other citation indexes, using 2,675 recently published library and information science articles. The results show that in March 2017, ResearchGate found fewer citations than Google Scholar but more than either Web of Science or Scopus. This held true for the dataset overall and for the six largest journals in it. ResearchGate correlated most strongly with Google Scholar citations, suggesting that ResearchGate is not predominantly tapping a fundamentally different source of data from Google Scholar. Nevertheless, preprint sharing in ResearchGate is substantial enough for authors to take seriously.

    Patent citation analysis with Google

    This is an accepted manuscript of an article published by Wiley-Blackwell in the Journal of the Association for Information Science and Technology on 23/09/2015, available online: https://doi.org/10.1002/asi.23608. The accepted version of the publication may differ from the final published version.
    Citations from patents to scientific publications provide useful evidence about the commercial impact of academic research, but automatically searchable databases are needed to exploit this connection for large-scale patent citation evaluations. Google covers multiple international patent office databases but does not index patent citations or allow automatic searches. In response, this article introduces a semiautomatic indirect method via Bing to extract and filter patent citations from Google to academic papers with an overall precision of 98%. The method was evaluated with 322,192 science and engineering Scopus articles from every second year of the period 1996-2012. Although manual Google Patents searches give more results, especially for articles with many patent citations, the difference is not large enough to be a major problem. Within Biomedical Engineering, Biotechnology, and Pharmacology & Pharmaceutics, 7% to 10% of Scopus articles had at least one patent citation, but other fields had far fewer, so patent citation analysis is only relevant for a minority of publications. Low but positive correlations between Google Patents citations and Scopus citations across all fields suggest that traditional citation counts cannot substitute for patent citations when evaluating research.

    Coverage of highly-cited documents in Google Scholar, Web of Science, and Scopus: a multidisciplinary comparison

    This study explores the extent to which bibliometric indicators based on counts of highly-cited documents could be affected by the choice of data source. The initial hypothesis is that databases that rely on journal selection criteria for their document coverage may not necessarily provide an accurate representation of highly-cited documents across all subject areas, while inclusive databases, which give each document the chance to stand on its own merits, might be better suited to identifying highly-cited documents. To test this hypothesis, an analysis of 2,515 highly-cited documents published in 2006 that Google Scholar displays in its Classic Papers product is carried out at the level of broad subject categories, checking whether these documents are also covered in Web of Science and Scopus, and whether the citation counts offered by the different sources are similar. The results show that a large fraction of highly-cited documents in the Social Sciences and Humanities (8.6%-28.2%) are invisible to Web of Science and Scopus. In the Natural, Life, and Health Sciences the proportion of missing highly-cited documents in Web of Science and Scopus is much lower. Furthermore, in all areas, Spearman correlation coefficients between citation counts in Google Scholar and those in Web of Science and Scopus are remarkably strong (.83-.99). The main conclusion is that the data about highly-cited documents available in the inclusive database Google Scholar do indeed reveal significant coverage deficiencies in Web of Science and Scopus in several areas of research. Therefore, using these selective databases to compute bibliometric indicators based on counts of highly-cited documents might produce biased assessments in poorly covered areas.
    Alberto Martín-Martín enjoys a four-year doctoral fellowship (FPU2013/05863) granted by the Ministerio de Educación, Cultura, y Deportes (Spain).
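    The coverage check described above amounts to simple set arithmetic: of the highly-cited documents found in the inclusive source, what share is absent from a selective database? A minimal sketch with hypothetical document IDs (not the study's data):

```python
def missing_fraction(inclusive_docs, selective_docs):
    """Share of the inclusive source's documents absent from the
    selective database."""
    inclusive = set(inclusive_docs)
    missing = inclusive - set(selective_docs)
    return len(missing) / len(inclusive)

gs_top = {"d1", "d2", "d3", "d4", "d5"}        # highly-cited per Google Scholar
wos    = {"d1", "d2", "d4"}                    # also indexed in Web of Science
print(f"{missing_fraction(gs_top, wos):.1%}")  # 40.0%
```

    The study's second step, correlating citation counts for the documents covered by both sources, would then use Spearman's rank correlation restricted to the intersection of the two sets.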

    Caveats for the Use of Citation Indicators in Research and Journal Evaluations

    Ageing of publications, percentage of self-citations, and impact vary from journal to journal within fields of science. The assumption that citation and publication practices are homogeneous within specialties and fields of science is invalid. Furthermore, the delineation among fields and specialties is fuzzy. Institutional units of analysis and persons may move between fields or span different specialties. The match between the citation index and institutional profiles varies among institutional units and nations, and may heavily affect the representation of the units. Non-ISI journals are increasingly cornered into "transdisciplinary" Mode-2 functions, with the exception of specialist journals publishing in languages other than English. An "externally cited impact factor" can be calculated for these journals. The citation impact of non-ISI journals is demonstrated using Science and Public Policy as the example.